Reviews: Learning Disentangled Representation for Robust Person Re-identification
This paper describes an approach to person re-identification that uses a generative model to effectively disentangle feature representations into identity-related and identity-unrelated aspects. The proposed technique uses an Identity-shuffling GAN (IS-GAN) that learns to reconstruct person images from paired latent representations even when the identity-specific representation is shuffled and paired with identity-unrelated representation from a different person. Experimental results are given on the main datasets in use today: CUHK03, Market-1501, and DukeMTMC. The paper is very well-written and the technical development is concise but clear. There are a *ton* of moving parts in the proposed approach, but I feel like the results would be reproducible with minimal head scratching from the written description.
Learning Disentangled Representation for Robust Person Re-identification
We address the problem of person re-identification (reID), that is, retrieving person images from a large dataset, given a query image of the person of interest. The key challenge is to learn person representations robust to intra-class variations, as different persons can have the same attribute and the same person's appearance looks different with viewpoint changes. Recent reID methods focus on learning features that are discriminative but robust to only a particular factor of variation (e.g., human pose), which requires corresponding supervisory signals (e.g., pose annotations). To tackle this problem, we propose to disentangle identity-related and -unrelated features from person images. Identity-related features contain information useful for specifying a particular person (e.g., clothing), while identity-unrelated ones hold other factors (e.g., human pose, scale changes).
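The identity-shuffling objective described above can be illustrated with a toy sketch. All names and the list-based "image" representation here are illustrative assumptions; the actual IS-GAN uses CNN encoders/decoders trained with reconstruction and adversarial losses.

```python
# Toy sketch of identity shuffling: encode two images, swap their
# identity-related codes, and reconstruct. (Hypothetical simplified
# interfaces, not the paper's implementation.)

def encode(image):
    # Hypothetical encoder: splits a feature vector into an
    # identity-related half and an identity-unrelated half.
    mid = len(image) // 2
    return image[:mid], image[mid:]

def decode(id_feat, unrel_feat):
    # Hypothetical decoder: reconstructs an "image" by concatenation.
    return id_feat + unrel_feat

# Two persons' images as toy feature vectors.
img_a = [1, 1, 2, 2]   # identity A, pose/background A
img_b = [3, 3, 4, 4]   # identity B, pose/background B

id_a, unrel_a = encode(img_a)
id_b, unrel_b = encode(img_b)

# Identity shuffling: pair A's identity code with B's
# identity-unrelated code; the generator must still reconstruct
# a plausible image of person A.
shuffled = decode(id_a, unrel_b)
print(shuffled)  # [1, 1, 4, 4]
```

Training the generator to reconstruct valid images under such shuffled pairings is what forces the two codes to carry disjoint information.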
Reviews: FD-GAN: Pose-guided Feature Distilling GAN for Robust Person Re-identification
This paper describes a GAN approach to addressing a common and important problem in person re-identification: inter- and intra-view pose variation. The technique, in brief, uses a generative model to implicitly marginalize away pose- and background-dependent information in the feature representation, distilling a representation that is invariant to both but still discriminative for person identities. Pose is represented as the spatial configuration of landmarks, and during training, person images conditioned on a randomly selected pose are generated from image encodings. These generated images are fed to multiple adversarial discriminators that determine whether the generated image is real or fake, whether the pose in a real or fake image is accurate, and whether two feature embeddings correspond to the same person. Experimental results are given on multiple, important benchmark datasets and show significant improvement over the state-of-the-art. Clarity, quality, and reproducibility: The clarity of exposition is quite good.
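The three adversarial criteria in the review above can be sketched as follows. Every function name and the dict-based "image" representation are illustrative assumptions; FD-GAN's actual discriminators are convolutional networks operating on images and embeddings.

```python
# Hedged sketch of FD-GAN's three training signals (toy boolean
# discriminators standing in for learned CNN discriminators).

def d_real_fake(image):
    # Image-realism discriminator: is the image real or generated?
    return image["source"] == "real"

def d_pose(image, target_pose):
    # Pose discriminator: does the image's pose match the
    # conditioning pose landmarks?
    return image["pose"] == target_pose

def d_identity(emb_a, emb_b):
    # Identity-verification discriminator: do two feature embeddings
    # belong to the same person? (toy metric: exact match)
    return emb_a == emb_b

# A generated sample conditioned on a randomly selected target pose,
# plus a real image of the same person.
target_pose = [(10, 20), (12, 40)]  # toy landmark coordinates
fake = {"source": "generated", "pose": target_pose, "id_emb": [0.5, 0.1]}
real = {"source": "real", "pose": target_pose, "id_emb": [0.5, 0.1]}

print(d_real_fake(fake),              # False: the sample is generated
      d_pose(fake, target_pose),      # True: pose matches the condition
      d_identity(fake["id_emb"], real["id_emb"]))  # True: same person
```

The generator is trained against all three signals at once, which is what pushes the encoder toward pose- and background-invariant yet identity-discriminative features.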